Signal's Meredith Whittaker: 'These are the people who could actually pause AI if they wanted to'
Meredith Whittaker is the president of Signal – the not-for-profit secure messaging app. The service, along with WhatsApp and similar messaging platforms, is opposing the UK government's online safety bill which, among other things, seeks to scan users' messages for harmful content. Prior to Signal, Whittaker worked at Google, co-founded NYU's AI Now Institute and was an adviser to the Federal Trade Commission. After 10 years at Google you organised the walkout over the company's attitude to sexual harassment accusations, after which in 2019 you were forced out. How did you feel about that?
The Companies Profiting From A.I. Are Profiting From A.I. Panic
Over the past few weeks, there's been some very public hand-wringing about artificial intelligence, much of it coming from people who have made A.I. their life's work. Geoffrey Hinton, dubbed the "godfather of A.I.," recently left his job at Google to embark upon a sort of media tour warning about the dangers of the technology. There was a public letter from Elon Musk and others calling for a pause in A.I. development, and an essay in Time from theorist Eliezer Yudkowsky saying generative A.I. can harm humanity, or even end it. On Friday's episode of What Next: TBD, I spoke with Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at NYU, to sort through the real threat of A.I. and what the doomerism discourse is missing. Our conversation has been edited and condensed for clarity. What do you make of the concerns raised by Geoffrey Hinton and others when it comes to A.I. safety?
The Dark Side of Big Tech's Funding for AI Research
Last week, prominent Google artificial intelligence researcher Timnit Gebru said she was fired by the company after managers asked her to retract or withdraw her name from a research paper, and she objected. Google maintains that she resigned, and Alphabet CEO Sundar Pichai said in a company memo on Wednesday that he would investigate what happened. The episode is a pointed reminder of tech companies' influence and power over their field. Big companies pump out influential research papers, fund academic conferences, compete to hire top researchers, and own the data centers required for large-scale AI experiments. A recent study found that the majority of tenure-track faculty at four prominent universities that disclose funding sources had received backing from Big Tech.
Ex-Googler Meredith Whittaker on Political Power in Tech, the Flaws of 'The Social Dilemma,' and…
OneZero is partnering with the Big Technology Podcast from Alex Kantrowitz to bring readers exclusive access to interview transcripts with notable figures in and around the tech industry. This week, Kantrowitz sits down with Meredith Whittaker, an A.I. researcher who helped lead Google's employee walkout in 2018. This interview, which took place at World Summit A.I, has been edited for length and clarity. To subscribe to the podcast and hear the interview for yourself, you can check it out on Apple Podcasts, Spotify, and Overcast. When I interviewed Tristan Harris about The Social Dilemma earlier this month, my mentions filled with people saying, "You should speak to the people who were critical of the social web long before the film." One name stood out: Meredith Whittaker. An A.I. researcher and former Big Tech employee, Whittaker helped lead Google's walkout in 2018 amid a season of activism inside the company. On this edition of the Big Technology Podcast, we spoke not only about her views on the film, but also of the future of workplace activism inside tech companies in a moment where some are questioning if it belongs at all. Alex Kantrowitz: It seems like your perspective on The Social Dilemma is a little bit different from Tristan's.
'People fix things. Tech doesn't fix things.' – TechCrunch
Veena Dubal is an unlikely star in the tech world. A scholar of labor practices in the taxi and ride-hailing industries and an Associate Professor at San Francisco's U.C. Hastings College of the Law, her work on the ethics of the gig economy has been covered by the New York Times, NBC News, New York Magazine, and other publications. She's been in public dialogue with Naomi Klein and other famous authors, and penned a prominent op-ed on facial recognition tech in San Francisco, all while winning awards for her contributions to legal scholarship in her area of specialization, labor and employment law. At the annual symposium of the AI Now Institute, an interdisciplinary research center at New York University, Dubal was a featured speaker. The symposium is the largest annual public gathering of the NYU-affiliated research group that examines AI's social implications.
How will AI change your life? AI Now Institute founders Kate Crawford and Meredith Whittaker explain.
Ask a layperson about artificial intelligence and they might point to sci-fi villains such as HAL from 2001: A Space Odyssey or the Terminator. But the co-founders of the AI Now Institute, Meredith Whittaker and Kate Crawford, want to change the conversation. Instead of talking about far-flung super-intelligent AI, they argued on the latest episode of Recode Decode, we should be talking about the ways AI is affecting people right now, in everything from education to policing to hiring. Rather than killer robots, you should be concerned about what happens to your résumé when it hits a program like the one Amazon tried to build. "They took two years to design, essentially, an AI automatic résumé scanner," Crawford said. "And they found that it was so biased against any female applicant that if you even had the word 'woman' on your résumé that it went to the bottom of the pile." That's a classic example of what Crawford calls "dirty data." Even though people think of algorithms as being ...
Recap: Artificial Intelligence Now #AINow – Microsoft New York
This week, the White House and New York University's Information Law Institute hosted Artificial Intelligence Now, a symposium exploring the impacts of artificial intelligence (AI) technologies across social and economic systems. We were pleased and honored to have Kate Crawford, Principal Researcher at Microsoft Research and Senior Research Fellow at the New York University Information Law Institute, represent Microsoft as she joined to discuss social inequality, labor, healthcare, and ethics in AI technologies. The symposium focused on the near future (5-10 years) in technology, with input from leaders in technology, industry, academia, and civil society. We've gathered some of the best moments from the symposium, in tweets, below: "Sorry, we're not going to be talking about the singularity tonight," says Kate Crawford, focusing on today's real challenges instead #AINow. "Tech moves so fast and policy moves so slow – there is a mismatch. I want every drop of benefit we can squeeze out" with oversight & protections.